
    Privacy Games: Optimal User-Centric Data Obfuscation

    In this paper, we design user-centric obfuscation mechanisms that impose the minimum utility loss for guaranteeing users' privacy. We optimize utility subject to a joint guarantee of differential privacy (indistinguishability) and distortion privacy (inference error). This double shield of protection limits the information leakage through the obfuscation mechanism as well as through posterior inference. We show that the privacy achieved through joint differential-distortion mechanisms against optimal attacks is as large as the maximum privacy that can be achieved by either of these mechanisms separately, and their utility cost is no larger than what either the differential or the distortion mechanism imposes alone. We model the optimization problem as a leader-follower game between the designer of the obfuscation mechanism and the potential adversary, and design adaptive mechanisms that anticipate and protect against optimal inference algorithms. Thus, the obfuscation mechanism is optimal against any inference algorithm.
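    The optimization over mechanisms under the differential-privacy constraint alone can be posed as a linear program. Below is a minimal sketch of that formulation; the prior, loss matrix, and epsilon are illustrative values, and it omits the paper's joint distortion-privacy constraint and the game-theoretic adaptation to the adversary.

```python
# Minimal sketch: utility-optimal obfuscation p(y|x) under an
# epsilon-differential-privacy constraint, solved as a linear program.
# The prior `pi`, the loss matrix, and `eps` are illustrative, not from the paper.
import numpy as np
from scipy.optimize import linprog

n = 4                     # secrets x and outputs y (assumed finite, equal in number)
eps = 1.0                 # differential-privacy parameter (assumed)
pi = np.full(n, 1.0 / n)  # uniform prior over secrets (assumed)
loss = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)

# Variables: p[x, y] flattened row-major; objective = expected utility loss.
c = (pi[:, None] * loss).ravel()

# Indistinguishability: p(y|x) <= e^eps * p(y|x') for every pair x != x' and every y.
rows, rhs = [], []
for y in range(n):
    for x in range(n):
        for xp in range(n):
            if x != xp:
                row = np.zeros(n * n)
                row[x * n + y] = 1.0
                row[xp * n + y] = -np.exp(eps)
                rows.append(row)
                rhs.append(0.0)

# Each row of p must be a probability distribution over outputs.
A_eq = np.zeros((n, n * n))
for x in range(n):
    A_eq[x, x * n:(x + 1) * n] = 1.0

res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
              A_eq=A_eq, b_eq=np.ones(n), bounds=(0, 1))
p = res.x.reshape(n, n)   # the obfuscation mechanism
print("expected utility loss:", res.fun)
```

    Adding the distortion-privacy (inference-error) guarantee couples this program with the adversary's optimal inference, which is what the paper's leader-follower formulation addresses.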

    Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning

    Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different levels of adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies. Comment: 2019 IEEE Symposium on Security and Privacy (SP).
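    As a rough illustration of why white-box access to SGD-trained models helps, a per-example gradient can serve as a membership feature: training members tend to yield smaller loss gradients than non-members. A hedged sketch follows (simplified thresholding, not the paper's attack models; `model`, `x`, `y`, and `threshold` are assumed inputs):

```python
# Hedged sketch (not the paper's attack models): a white-box membership signal
# based on the per-example gradient of the loss w.r.t. the model parameters.
# `model` is any trained PyTorch classifier; `x` is one input, `y` its label tensor.
import torch
import torch.nn.functional as F

def gradient_norm(model, x, y):
    """L2 norm of the loss gradient over all parameters for a single example."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))
    loss = F.cross_entropy(logits, y.unsqueeze(0))
    loss.backward()
    sq = sum((p.grad ** 2).sum() for p in model.parameters() if p.grad is not None)
    return torch.sqrt(sq).item()

def looks_like_member(model, x, y, threshold):
    # Training members tend to have smaller gradient norms; `threshold` would be
    # calibrated on known member/non-member examples (assumption, for illustration).
    return gradient_norm(model, x, y) < threshold
```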

    Determination of Metabolic Pathways and PPI Network of Sarigol in Response to Osmotic Stress: An In Silico Study

    The complexity of plant responses to abiotic stress makes it difficult to manage and target specific genes/proteins for improving crop performance. Understanding the molecular mechanisms recruited by plants under stressful conditions is therefore essential. To this end, Sarigol, a salt-sensitive cultivar of canola, was studied in silico on the basis of its differentially expressed proteins. The results indicated that the majority of the proteins had the molecular function of catalytic activity; proteins involved in the response to stress were underrepresented in Sarigol, whereas proteins involved in cellular and metabolic processes were overrepresented. Phylogenetic analysis divided the proteins into four groups, and protein-protein interaction network prediction revealed two sets of interacting proteins, while most of the proteins did not show any interactions. The results suggest that, at the molecular level, Sarigol is unable to mount the appropriate responses observed in tolerant plants.
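    For illustration, the two sets of interacting proteins described above correspond to connected components of the predicted PPI graph. A minimal sketch with a hypothetical edge list (the protein names below are placeholders, not the study's data):

```python
# Illustrative sketch: interacting protein sets are the connected components
# of the predicted PPI graph. Edges and protein names are hypothetical.
import networkx as nx

edges = [("HSP70", "HSP90"), ("HSP90", "RBCL"),
         ("ATPA", "ATPB"), ("ATPB", "CA1")]

g = nx.Graph(edges)
clusters = [sorted(c) for c in nx.connected_components(g) if len(c) > 1]
print("interacting protein sets:", clusters)
```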

    Bias Propagation in Federated Learning

    We show that participating in federated learning can be detrimental to group fairness. In fact, the bias of a few parties against under-represented groups (identified by sensitive attributes such as gender or race) can propagate through the network to all other parties. We analyze and explain bias propagation in federated learning on naturally partitioned real-world datasets. Our analysis reveals that biased parties unintentionally yet stealthily encode their bias in a small number of model parameters and, throughout the training, steadily increase the dependence of the global model on sensitive attributes. Importantly, the bias experienced in federated learning is higher than what parties would otherwise encounter in centralized training on the union of all their data, indicating that the bias is due to the algorithm itself. Our work calls for auditing group fairness in federated learning and for designing learning algorithms that are robust to bias propagation.
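    A simple way to audit the effect described above is to track a group-fairness metric of the global model after each aggregation round. A hedged sketch of a demographic-parity audit (the metric choice and all names are assumptions, not the paper's protocol):

```python
# Hedged sketch of a group-fairness audit for the aggregated global model
# (metric and names are assumptions, not the paper's protocol).
import numpy as np

def demographic_parity_gap(predictions, sensitive):
    """Gap in positive-prediction rates between groups s=1 and s=0."""
    pred = np.asarray(predictions)
    s = np.asarray(sensitive)
    return abs(pred[s == 1].mean() - pred[s == 0].mean())

# Usage (hypothetical): after each aggregation round, each party evaluates the
# global model on its own data and reports the gap.
# gap = demographic_parity_gap(global_model.predict(X_party), s_party)
```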

    Full-Factorial Experimental Design to Determine the Impacts of Influential Parameters on the Porosity and Mechanical Strength of LLDPE Microporous Membrane Fabricated via Thermally Induced Phase Separation Method

    Membrane separation processes have wide application in liquid- and gas-purification industries, offering advantages such as convenient processability and low production and operational costs. The thermally induced phase separation (TIPS) process, due to its many advantages, has attracted special attention in recent decades. In this process, a homogeneous polymer-diluent solution is formed at a temperature above the polymer's melting point and is then cast into the desired shape; the diluent is subsequently extracted to create a porous structure. In this work, a microporous LLDPE membrane is fabricated, and a full-factorial experimental design is used to evaluate the individual as well as mutual impacts of polymer concentration, membrane thickness, and cooling-bath temperature on the porosity and mechanical strength of the membrane. The analysis of variance of membrane porosity and mechanical strength showed that the impact of cooling-bath temperature is much more important than that of polymer concentration or membrane thickness. A higher cooling-bath temperature, a lower polymer concentration, and a lower membrane thickness result in higher porosity.
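    As a sketch of the analysis style described above, a coded two-level full-factorial design with an ANOVA can rank the factors' effects. The data below are synthetic, generated only to echo the reported trend that cooling-bath temperature dominates; nothing here is the paper's data.

```python
# Illustrative sketch of a coded 2^3 full-factorial analysis of porosity.
# Factors: polymer concentration, membrane thickness, cooling-bath temperature.
# The response is synthetic (temperature effect made dominant on purpose).
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

levels = [-1, 1]                                      # coded low/high factor levels
runs = list(itertools.product(levels, repeat=3)) * 2  # two replicates per run
df = pd.DataFrame(runs, columns=["conc", "thick", "temp"])

rng = np.random.default_rng(0)
df["porosity"] = (60 - 2 * df["conc"] - 1 * df["thick"] + 8 * df["temp"]
                  + rng.normal(0, 0.5, len(df)))      # temperature dominates

model = smf.ols("porosity ~ conc * thick * temp", data=df).fit()
print(anova_lm(model))   # ranks individual effects and their interactions
```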